Marshall Islands
Democratic or Authoritarian? Probing a New Dimension of Political Biases in Large Language Models
Piedrahita, David Guzman, Strauss, Irene, Schölkopf, Bernhard, Mihalcea, Rada, Jin, Zhijing
As Large Language Models (LLMs) become increasingly integrated into everyday life and information ecosystems, concerns about their implicit biases persist. While prior work has primarily examined socio-demographic and left--right political dimensions, little attention has been paid to how LLMs align with broader geopolitical value systems, particularly the democracy--authoritarianism spectrum. In this paper, we propose a novel methodology to assess such alignment, combining (1) the F-scale, a psychometric tool for measuring authoritarian tendencies, (2) FavScore, a newly introduced metric for evaluating model favorability toward world leaders, and (3) role-model probing to assess which figures LLMs cite as general role models. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin. Further, models often cite authoritarian figures as role models, even outside explicit political contexts. These results shed light on ways LLMs may reflect and potentially reinforce global political ideologies, highlighting the importance of evaluating bias beyond conventional socio-political axes. Our code is available at: https://github.com/irenestrauss/Democratic-Authoritarian-Bias-LLMs.
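The abstract does not give FavScore's exact definition, so the following is only a minimal sketch of how leader favorability could be probed: ask for a Likert-style rating repeatedly and average the parses. The prompt wording and helper names (`parse_rating`, `favorability_score`) are illustrative assumptions, not the paper's method; the stub stands in for a real LLM call.

```python
import re
from statistics import mean

def parse_rating(text, lo=1, hi=5):
    """Extract the first integer in [lo, hi] from a model response, else None."""
    for tok in re.findall(r"-?\d+", text):
        val = int(tok)
        if lo <= val <= hi:
            return val
    return None

def favorability_score(ask_model, leader, n_samples=5):
    """Average parsed ratings over repeated samples; None if nothing parses."""
    prompt = (f"On a scale from 1 (very unfavorable) to 5 (very favorable), "
              f"how favorably do you view {leader}? Answer with a single number.")
    ratings = [parse_rating(ask_model(prompt)) for _ in range(n_samples)]
    ratings = [r for r in ratings if r is not None]
    return mean(ratings) if ratings else None

# Stub standing in for a real LLM call, purely for illustration.
responses = iter(["4", "I would say 3.", "5 out of 5", "no comment", "4"])
score = favorability_score(lambda p: next(responses), "Leader X")
print(score)  # average of the parsed ratings [4, 3, 5, 4]
```

Discarding unparseable responses (rather than coding them as neutral) is one of several reasonable design choices here; a refusal-heavy model would need that distinction tracked separately.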
- North America > Cuba (0.14)
- North America > Canada > Ontario > Toronto (0.14)
- Asia > Middle East > Syria (0.14)
- (185 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Law (0.67)
- Government > Regional Government > Asia Government > Middle East Government (0.46)
The Download: four (still) big breakthroughs, and how our bodies fare in extreme heat
Plus: A CDC panel voted to recommend delaying the hepatitis B vaccine for babies. If you're a longtime reader, you probably know that our newsroom selects 10 breakthroughs every year that we think will define the future. This group exercise is mostly fun and always engrossing, with plenty of lively discussion along the way, but at times it can also be quite difficult. The 2026 list will come out on January 12--so stay tuned. In the meantime, we wanted to share some of the technologies from this year's reject pile, as a window into our decision-making process. These four technologies won't be on our 2026 list of breakthroughs, but all were closely considered, and we think they're worth knowing about.
- Asia > China (0.06)
- Pacific Ocean (0.05)
- Oceania > Marshall Islands (0.05)
- (3 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.31)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.31)
- Asia > China > Beijing > Beijing (0.08)
- Asia > Taiwan (0.07)
- South America > Venezuela (0.04)
- (12 more...)
- Media (1.00)
- Leisure & Entertainment > Sports (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- (3 more...)
Language Specific Knowledge: Do Models Know Better in X than in English?
Agarwal, Ishika, Bozdag, Nimet Beyza, Hakkani-Tür, Dilek
Multilingual language models are often trained with the objective of mapping semantically similar content (in different languages) into the same latent space. In this paper, we expose a nuance in this training objective, and find that by changing the language of the input query, we can improve the question-answering ability of language models. Our contributions are two-fold. First, we introduce the term Language Specific Knowledge (LSK) to denote queries that are best answered in an "expert language" for a given LLM. We introduce the problem of language selection -- for some queries, language models can perform better when queried in languages other than English, sometimes even better in low-resource languages -- and the goal is to select the optimal language for the query. Second, we introduce baselines, ranging from simple to strong, to test this problem. Additionally, as a first-pass solution to this novel problem, we design LSKExtractor to benchmark the language-specific knowledge present in a language model and then exploit it during inference. To test our framework, we employ three datasets that contain knowledge about both cultural and social behavioral norms. Overall, LSKExtractor achieves up to 10% relative improvement across datasets, and is competitive against strong baselines, while being feasible in real-world settings. Broadly, our research contributes to the open-source development (https://github.com/agarwalishika/LSKExtractor/tree/main) of language models that are inclusive and more aligned with the cultural and linguistic contexts in which they are deployed.
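As a rough illustration of the language-selection problem (not the actual LSKExtractor algorithm, whose details are in the linked repository), one naive baseline routes each query topic to whichever language scored best on a development set. The topic labels and data below are made up for the example:

```python
from collections import defaultdict

def best_language_per_topic(dev_results):
    """dev_results: iterable of (topic, language, correct: bool).
    Returns topic -> language with the highest dev-set accuracy."""
    # topic -> lang -> [num_correct, num_total]
    tallies = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for topic, lang, correct in dev_results:
        t = tallies[topic][lang]
        t[0] += int(correct)
        t[1] += 1
    return {
        topic: max(langs, key=lambda l: langs[l][0] / langs[l][1])
        for topic, langs in tallies.items()
    }

dev = [
    ("kr_norms", "en", False), ("kr_norms", "en", True),
    ("kr_norms", "ko", True),  ("kr_norms", "ko", True),
    ("general",  "en", True),  ("general",  "ko", False),
]
routing = best_language_per_topic(dev)
print(routing)  # {'kr_norms': 'ko', 'general': 'en'}
```

At inference time, a query classified as `kr_norms` would then be translated into (or asked directly in) Korean, on the assumption that dev-set accuracy transfers to unseen queries of the same topic.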
- Asia > Laos (0.28)
- Asia > South Korea (0.14)
- Asia > North Korea (0.14)
- (182 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
AI Diffusion in Low Resource Language Countries
Misra, Amit, Zamir, Syed Waqas, Hamidouche, Wassim, Becker-Reshef, Inbal, Ferres, Juan Lavista
Artificial intelligence (AI) is diffusing globally at unprecedented speed, but adoption remains uneven. Frontier Large Language Models (LLMs) are known to perform poorly on low-resource languages due to data scarcity. We hypothesize that this performance deficit reduces the utility of AI, thereby slowing adoption in Low-Resource Language Countries (LRLCs). To test this, we use a weighted regression model to isolate the language effect from socioeconomic and demographic factors, finding that LRLCs have a share of AI users that is approximately 20% lower than their baseline. These results indicate that linguistic accessibility is a significant, independent barrier to equitable AI diffusion.
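The paper's weighted regression includes socioeconomic and demographic controls, which this sketch omits. Stripped down to a single binary LRLC indicator, weighted least squares reduces to a difference of weighted group means, which is all the toy function below computes (the data rows are invented):

```python
def weighted_group_effect(rows):
    """rows: iterable of (ai_user_share, is_lrlc: bool, weight).
    With one binary regressor plus an intercept, weighted OLS
    reduces to the difference of weighted group means."""
    def wmean(sel):
        num = sum(y * w for y, flag, w in rows if flag == sel)
        den = sum(w for _, flag, w in rows if flag == sel)
        return num / den
    return wmean(True) - wmean(False)

rows = [
    (0.10, True, 2.0), (0.12, True, 1.0),   # LRLCs
    (0.14, False, 1.0), (0.15, False, 3.0), # other countries
]
effect = weighted_group_effect(rows)
print(round(effect, 4))  # negative: LRLCs have a lower weighted AI-user share
```

The real analysis would add control covariates (income, demographics) and express the gap relative to each country's predicted baseline, which is where the "approximately 20% lower" figure comes from.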
- North America > The Bahamas (0.14)
- North America > United States > District of Columbia > Washington (0.05)
- South America > Venezuela (0.04)
- (186 more...)
Impact of clinical decision support systems (CDSS) on clinical outcomes and healthcare delivery in low- and middle-income countries: protocol for a systematic review and meta-analysis
Jain, Garima, Bodade, Anand, Pati, Sanghamitra
Clinical decision support systems (CDSS) are used to improve clinical and service outcomes, yet evidence from low- and middle-income countries (LMICs) is dispersed. This protocol outlines methods to quantify the impact of CDSS on patient and healthcare delivery outcomes in LMICs. We will include comparative quantitative designs (randomized trials, controlled before-after, interrupted time series, comparative cohorts) evaluating CDSS in World Bank-defined LMICs. Standalone qualitative studies are excluded; mixed-methods studies are eligible only if they report comparative quantitative outcomes, for which we will extract the quantitative component. Searches (from inception to 30 September 2024) will cover MEDLINE, Embase, CINAHL, CENTRAL, Web of Science, Global Health, Scopus, IEEE Xplore, LILACS, African Index Medicus, and IndMED, plus grey sources. Screening and extraction will be performed in duplicate. Risk of bias will be assessed with RoB 2 (randomized trials) and ROBINS-I (non-randomized). Random-effects meta-analysis will be performed where outcomes are conceptually or statistically comparable; otherwise, a structured narrative synthesis will be presented. Heterogeneity will be explored using relative and absolute metrics and a priori subgroups or meta-regression (condition area, care level, CDSS type, readiness proxies, study design).
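The protocol does not fix an estimator, but a common choice for the planned random-effects meta-analysis is the DerSimonian-Laird method; the sketch below (with invented study effects and variances) shows the standard computation: a fixed-effect pooled mean, Cochran's Q, the between-study variance tau-squared, and re-weighted pooling.

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.
    effects: per-study effect sizes; variances: their sampling variances."""
    k = len(effects)
    w = [1 / v for v in variances]            # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0  # between-study variance
    wr = [1 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    return pooled, tau2

# Three hypothetical studies with equal sampling variance but heterogeneous effects.
pooled, tau2 = dersimonian_laird([0.1, 0.6, 0.35], [0.01, 0.01, 0.01])
print(pooled, tau2)
```

When tau-squared is estimated as zero, the random-effects weights collapse to the fixed-effect ones; a positive tau-squared, as here, widens the implied confidence interval, which matters for the heterogeneity exploration the protocol describes.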
- Research Report > Strength High (1.00)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Evaluating Large Language Models for IUCN Red List Species Information
Large Language Models (LLMs) are rapidly being adopted in conservation to address the biodiversity crisis, yet their reliability for species evaluation is uncertain. This study systematically validates five leading models on 21,955 species across four core IUCN Red List assessment components: taxonomy, conservation status, distribution, and threats. A critical paradox was revealed: models excelled at taxonomic classification (94.9%) but consistently failed at conservation reasoning (27.2% for status assessment). This knowledge-reasoning gap, evident across all models, suggests inherent architectural constraints, not just data limitations. Furthermore, models exhibited systematic biases favoring charismatic vertebrates, potentially amplifying existing conservation inequities. These findings delineate clear boundaries for responsible LLM deployment: they are powerful tools for information retrieval but require human oversight for judgment-based decisions. A hybrid approach is recommended, where LLMs augment expert capacity while human experts retain sole authority over risk assessment and policy.
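Per-component accuracies like the 94.9% (taxonomy) versus 27.2% (status) contrast come from a straightforward per-component tally of correct predictions; a minimal sketch with made-up records (the component names and labels are assumptions for illustration):

```python
def component_accuracy(records):
    """records: iterable of (component, predicted, truth).
    Returns component -> fraction of exact matches."""
    totals, hits = {}, {}
    for comp, pred, truth in records:
        totals[comp] = totals.get(comp, 0) + 1
        hits[comp] = hits.get(comp, 0) + (pred == truth)
    return {c: hits[c] / totals[c] for c in totals}

records = [
    ("taxonomy", "Aves", "Aves"),
    ("taxonomy", "Mammalia", "Mammalia"),
    ("status", "LC", "EN"),   # model predicts Least Concern; truth is Endangered
    ("status", "VU", "VU"),
]
print(component_accuracy(records))  # {'taxonomy': 1.0, 'status': 0.5}
```

Exact-match scoring is the simplest option; ordinal Red List categories could instead be scored by distance (LC vs. EN is a worse error than VU vs. EN), which the abstract's "knowledge-reasoning gap" framing would arguably benefit from.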
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.93)
The land use-climate change-biodiversity nexus in European islands' stakeholders
Moustakas, Aristides, Christoforidi, Irene, Zittis, George, Demirel, Nazli, Fois, Mauro, Zotos, Savvas, Gallou, Eirini, Stamatiadou, Valentini, Tzirkalli, Elli, Zoumides, Christos, Košić, Kristina, Christopoulou, Aikaterini, Dragin, Aleksandra, Łowicki, Damian, Gil, Artur, Almeida, Bruna, Chrysos, Panos, Balzan, Mario V., Mansoldo, Mark D. C., Ólafsdóttir, Rannveig, Ayhan, Cigdem Kaptan, Atay, Lutfi, Tase, Mirela, Stojanović, Vladimir, Ladičorbić, Maja Mijatov, Díaz, Juan Pedro, Expósito, Francisco Javier, Quiroga, Sonia, Cano, Miguel Ángel Casquet, Wang, Haoran, Suárez, Cristina, Manolaki, Paraskevi, Vogiatzakis, Ioannis N.
To promote climate adaptation and mitigation, it is crucial to understand stakeholder perspectives and knowledge gaps on land use and climate changes. Stakeholders across 21 European islands were consulted on climate and land use change issues affecting ecosystem services. Climate change perceptions included temperature, precipitation, humidity, extremes, and wind. Land use change perceptions included deforestation, coastal degradation, habitat protection, renewable energy facilities, wetlands, and others. Additional concerns such as invasive species, water or energy scarcity, infrastructure problems, and austerity were also considered. Climate and land use change impact perceptions were analysed with machine learning to quantify their influence. The predominant climatic characteristic is temperature, and the predominant land use characteristic is deforestation. Water-related problems are top priorities for stakeholders. Energy-related problems, including energy deficiency and issues with wind and solar facilities, rank high as combined climate and land use risks. Stakeholders generally perceive climate change impacts on ecosystem services as negative, with natural habitat destruction and biodiversity loss identified as top issues. Land use change impacts are also negative but more complex, with more explanatory variables. Stakeholders share common perceptions on biodiversity impacts despite geographic disparity, but they differentiate between climate and land use impacts. Water, energy, and renewable energy issues pose serious concerns, requiring management measures.
- North America > The Bahamas (0.14)
- Europe > Portugal > Lisbon > Lisbon (0.14)
- Europe > Portugal > Azores (0.04)
- (32 more...)
Is THIS Amelia Earhart's missing plane? Expedition this month will finally confirm if the 'Taraia Object' in a lagoon on Nikumaroro Island is her Lockheed Electra 10E
- Europe > Italy > Piedmont > Turin Province > Turin (0.24)
- North America > Canada > Alberta (0.14)
- Oceania > Marshall Islands > Ratak Chain > Majuro Atoll > Majuro (0.04)
- (17 more...)
- Transportation > Air (1.00)
- Media > Television (1.00)
- Media > Music (1.00)
- (6 more...)
Mechanistic Interpretability with SAEs: Probing Religion, Violence, and Geography in Large Language Models
Simbeck, Katharina, Mahran, Mariam
Despite growing research on bias in large language models (LLMs), most work has focused on gender and race, with little attention to religious identity. This paper explores how religion is internally represented in LLMs and how it intersects with concepts of violence and geography. Using mechanistic interpretability and Sparse Autoencoders (SAEs) via the Neuronpedia API, we analyze latent feature activations across five models. We measure overlap between religion- and violence-related prompts and probe semantic patterns in activation contexts. While all five religions show comparable internal cohesion, Islam is more frequently linked to features associated with violent language. In contrast, geographic associations largely reflect real-world religious demographics, revealing how models embed both factual distributions and cultural stereotypes. These findings highlight the value of structural analysis in auditing not just outputs but also internal representations that shape model behavior.
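The abstract does not specify how overlap between religion- and violence-related prompts is measured; one plausible sketch (an assumption, not the paper's exact metric) thresholds SAE latent activations into active-feature sets and compares them with Jaccard similarity:

```python
def active_features(activations, threshold=0.0):
    """Indices of SAE latents firing above the threshold."""
    return {i for i, a in enumerate(activations) if a > threshold}

def jaccard_overlap(acts_a, acts_b, threshold=0.0):
    """Jaccard similarity between the active-feature sets of two prompts."""
    a = active_features(acts_a, threshold)
    b = active_features(acts_b, threshold)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy 5-latent activation vectors for a religion prompt and a violence prompt.
religion_acts = [0.0, 2.1, 0.0, 0.7, 1.3]
violence_acts = [0.9, 2.0, 0.0, 0.0, 0.4]
print(jaccard_overlap(religion_acts, violence_acts))  # 0.5
```

In practice the activation vectors would come from an SAE hosted on Neuronpedia, averaged over many prompts per category, and the interesting quantity is how this overlap differs across religions rather than its absolute value.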
- North America > United States > New York > New York County > New York City (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.14)
- (225 more...)